

β-risk: a New Surrogate Risk for Learning from Weakly Labeled Data

Zantedeschi, Valentina, Emonet, Rémi, Sebban, Marc

Neural Information Processing Systems

During the past few years, the machine learning community has paid attention to developing new methods for learning from weakly labeled data. This field covers different settings like semi-supervised learning, learning with label proportions, multi-instance learning, noise-tolerant learning, etc. This paper presents a generic framework to deal with these weakly labeled scenarios. We introduce the β-risk as a generalized formulation of the standard empirical risk based on surrogate margin-based loss functions. This risk allows us to express the reliability of the labels and to derive different kinds of learning algorithms. We specifically focus on SVMs and propose a soft-margin β-SVM algorithm which behaves better than the state of the art.
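To make the abstract's idea concrete, here is a minimal sketch of what a reliability-weighted surrogate risk of this flavour could look like. The exact formulation in the paper is not reproduced in this listing, so the function name `beta_risk`, the per-example weight matrix `beta`, and the choice of the hinge loss are illustrative assumptions, not the authors' definitions: each example carries a weight for each candidate label, and a fully trusted (one-hot) `beta` reduces to the ordinary empirical hinge risk.

```python
import numpy as np

def hinge(margin):
    """Standard hinge surrogate loss on the margin m = y * h(x)."""
    return np.maximum(0.0, 1.0 - margin)

def beta_risk(scores, beta):
    """Illustrative reliability-weighted empirical surrogate risk (assumed form).

    scores : (n,) real-valued predictions h(x_i)
    beta   : (n, 2) reliability weights for candidate labels -1 and +1;
             each row sums to 1 (fully supervised case: one-hot rows).
    """
    labels = np.array([-1.0, 1.0])
    # loss of each example under each candidate label, shape (n, 2)
    losses = hinge(scores[:, None] * labels[None, :])
    # weight each candidate-label loss by its reliability, average over examples
    return float(np.mean(np.sum(beta * losses, axis=1)))

# One-hot beta (fully trusted labels) recovers the usual hinge risk:
scores = np.array([2.0, -0.5])
beta = np.array([[0.0, 1.0],   # label +1, fully trusted
                 [1.0, 0.0]])  # label -1, fully trusted
print(beta_risk(scores, beta))  # mean of hinge(2.0) = 0 and hinge(0.5) = 0.5 -> 0.25
```

Softening the rows of `beta` (e.g. `[0.3, 0.7]`) would express partial confidence in a label, which is the kind of flexibility the abstract attributes to the framework.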



Reviews: β-risk: a New Surrogate Risk for Learning from Weakly Labeled Data

Neural Information Processing Systems

UPDATE AFTER REBUTTAL: I still feel the value of the framework isn't fully convincing. My basic issue is that, for the weakly supervised scenarios that don't already have principled algorithms, the precise reason the provided formulation is superior is unclear. For example, in SSL the proposed method has a very similar flavour to self-training. I like, however, that there is an attempt at a unified approach to a range of such problems, and it's possible that one could do something interesting with this framework in future work. On the technical side, I still don't quite get the optimisation proposed for beta (Line 144).



